22 research outputs found

    Coverage of hospital-based cataract surgery and barriers to the uptake of surgery among cataract blind persons in Nigeria: the Nigeria National Blindness and Visual Impairment Survey.

    No full text
    PURPOSE: To determine cataract surgical coverage and barriers to modern cataract surgery in Nigeria. METHODS: Multistage stratified cluster random sampling was used to identify a nationally representative sample of 15,027 persons aged 40+ years. All underwent visual acuity testing, frequency doubling technology visual field testing, autorefraction, and measurement of best corrected vision if <6/12 in one or both eyes. An ophthalmologist examined the anterior segment and fundus through an undilated pupil for all participants. Participants were examined by a second ophthalmologist, using a slit lamp and a dilated fundus examination with a 90 diopter condensing lens, if vision was <6/12 in one or both eyes or there were optic disc changes suggestive of glaucoma, as well as 1 in 7 participants regardless of findings. All those who had undergone cataract surgery were asked where and when this had taken place. Individuals who were severely visually impaired or blind from unoperated cataract were asked why they had not undergone surgery. RESULTS: A total of 13,591 participants were examined (response rate 89.9%). The prevalence of cataract surgery was 1.6% (95% confidence interval 1.4-1.8) and was significantly higher among those aged ≥70 years. Cataract surgical coverage (persons) in Nigeria was 38.3%. Coverage was 1.7 times higher among males than females, and only 9.1% among women in the South-South geopolitical zone. Over one third (36%) of those who were cataract blind said they could not afford surgery. CONCLUSIONS: Cataract surgical coverage in Nigeria was among the lowest in the world. Urgent initiatives are necessary to improve surgical output and access to surgery.
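    The cataract surgical coverage (persons) figure follows from the standard formula: people who received surgery divided by all who needed it (operated plus unoperated). A minimal sketch of the arithmetic, using hypothetical counts chosen only to reproduce the reported 38.3%:

```python
def cataract_surgical_coverage(operated: int, needing_surgery: int) -> float:
    """Cataract surgical coverage (persons), as a percentage:
    persons who received surgery / (operated + still needing surgery)."""
    return 100.0 * operated / (operated + needing_surgery)

# Hypothetical counts, chosen only to illustrate the arithmetic behind
# the survey's reported national coverage of 38.3%.
print(round(cataract_surgical_coverage(383, 617), 1))  # 38.3
```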

    A Novel Multidimensional Reference Model For Heterogeneous Textual Datasets Using Context, Semantic And Syntactic Clues

    Full text link
    With the advancement of technology and the widespread use of modern devices, voluminous data are produced. Of these data, 80% are unstructured and the remaining 20% are structured or semi-structured. The data are produced in heterogeneous formats without following any standard. Among heterogeneous (structured, semi-structured, and unstructured) data, textual data are nowadays used by industries for the prediction and visualization of future challenges. Extracting useful information from such data is challenging for stakeholders because of lexical and semantic matching. A few studies have addressed this issue using ontologies and semantic tools, but the main limitation of that work is its low coverage of multidimensional terms. To solve this problem, this study produces a novel multidimensional reference model (MRM) using linguistic categories for heterogeneous textual datasets. Categories such as context, semantic, and syntactic clues are considered along with their scores. The main contribution of the MRM is that it checks each token against each term based on an index of linguistic categories such as synonym, antonym, formal, lexical word order, and co-occurrence. The experiments show that the MRM outperforms the state-of-the-art single-dimension reference model in terms of coverage, linguistic categories, and heterogeneous datasets. Comment: International Journal of Advanced Science and Applications, Volume 14, Issue 10, pp. 754-763, 202
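    The core idea of checking each token against each term over an index of linguistic categories can be sketched as follows. This is a minimal illustration, not the MRM itself: the category weights and index contents are hypothetical.

```python
# Hypothetical index of (word, word) pairs per linguistic category,
# following the category names used in the abstract.
INDEX = {
    "synonym": {("car", "automobile")},
    "antonym": {("hot", "cold")},
    "co-occurrence": {("data", "analytics")},
}
# Hypothetical per-category scores.
WEIGHTS = {"synonym": 1.0, "antonym": 0.5, "co-occurrence": 0.8}

def match_score(token: str, term: str) -> float:
    """Sum the weights of every linguistic category in which the
    (token, term) pair is indexed, in either order."""
    return sum(
        w for cat, w in WEIGHTS.items()
        if (token, term) in INDEX[cat] or (term, token) in INDEX[cat]
    )

print(match_score("automobile", "car"))  # 1.0 (synonym match)
```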

    Search-Based Fairness Testing: An Overview

    Full text link
    Artificial Intelligence (AI) has demonstrated remarkable capabilities in domains such as recruitment, finance, healthcare, and the judiciary. However, biases in AI systems raise ethical and societal concerns, emphasizing the need for effective fairness testing methods. This paper reviews current research on fairness testing, particularly its application through search-based testing. Our analysis highlights progress and identifies areas for improvement in addressing AI systems' biases. Future research should focus on leveraging established search-based testing methodologies for fairness testing. Comment: IEEE International Conference on Computing (ICOCO 2023), Langkawi Island, Malaysia, pp. 89-94, October 202
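    A common oracle in the fairness testing this overview surveys is individual fairness: perturb only a protected attribute and flag inputs whose prediction flips. A minimal sketch with a hypothetical toy classifier, where plain exhaustive search stands in for the genetic or random search a real search-based tool would use:

```python
def toy_classifier(age: int, income: int, gender: int) -> int:
    # Deliberately biased toy model: the protected attribute (gender)
    # leaks directly into the decision score.
    return 1 if income + 5 * gender > 60 else 0

def find_fairness_violation():
    """Scan candidate inputs and return the first one whose prediction
    changes when only the protected attribute is flipped."""
    for income in range(0, 101):
        if toy_classifier(40, income, 0) != toy_classifier(40, income, 1):
            return (40, income)
    return None

print(find_fairness_violation())  # (40, 56): the decision depends on gender
```

A search-based tool replaces the linear scan with a fitness-guided search (e.g. a genetic algorithm) that steers generation toward such discriminatory instances in much larger input spaces.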

    HABCSm: A Hamming Based t-way Strategy based on Hybrid Artificial Bee Colony for Variable Strength Test Sets Generation

    Get PDF
    Search-based software engineering, which involves the deployment of meta-heuristics in applicable software processes, has been gaining wide attention. Recently, researchers have been advocating the adoption of meta-heuristic algorithms for t-way testing strategies (where t denotes the interaction strength among parameters). Although helpful, no single meta-heuristic based t-way strategy can claim dominance over its counterparts. For this reason, hybridizing meta-heuristic algorithms can enhance the search capabilities of each by compensating for the limitations of one algorithm with the strengths of another. Consequently, this paper proposes a new meta-heuristic based t-way strategy, called the Hybrid Artificial Bee Colony (HABCSm) strategy, which merges the advantages of the Artificial Bee Colony (ABC) algorithm with those of the Particle Swarm Optimization (PSO) algorithm. HABCSm is the first t-way strategy to adopt a Hybrid Artificial Bee Colony (HABC) algorithm with Hamming distance as its core method for generating a final test set, and the first to adopt the Hamming distance as the final selection criterion for enhancing the exploration of new solutions. The experimental results demonstrate that HABCSm provides superior competitive performance over its counterparts. This finding therefore contributes to the field of software testing by minimizing the number of test cases required for test execution.
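    To make the t-way goal concrete, here is a minimal pairwise (t = 2) sketch: a greedy loop that picks the test covering the most uncovered value pairs, breaking ties by the largest minimum Hamming distance to tests already chosen. The parameter model is hypothetical, and plain greedy search stands in for the hybrid ABC/PSO meta-heuristic itself; only the Hamming-distance selection idea is illustrated.

```python
from itertools import combinations, product

PARAMS = [["a", "b"], ["x", "y"], [0, 1]]  # 3 hypothetical parameters, 2 values each

def pairs(test):
    """All (param_i, param_j, value_i, value_j) interactions a test covers."""
    return {(i, j, test[i], test[j]) for i, j in combinations(range(len(test)), 2)}

def hamming(t1, t2):
    return sum(v1 != v2 for v1, v2 in zip(t1, t2))

def generate_pairwise():
    uncovered = set().union(*(pairs(t) for t in product(*PARAMS)))
    suite = []
    while uncovered:
        # Greedy choice: most newly covered pairs; ties broken by the
        # largest minimum Hamming distance to the tests chosen so far.
        best = max(
            product(*PARAMS),
            key=lambda t: (len(pairs(t) & uncovered),
                           min((hamming(t, s) for s in suite), default=0)),
        )
        suite.append(best)
        uncovered -= pairs(best)
    return suite

suite = generate_pairwise()
print(len(suite), "tests cover all value pairs (vs 8 exhaustive tests)")
```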

    HausaNLP at SemEval-2023 Task 12: Leveraging African Low Resource TweetData for Sentiment Analysis

    Full text link
    We present the findings of SemEval-2023 Task 12, a shared task on sentiment analysis for low-resource African languages using a Twitter dataset. The task featured three subtasks: subtask A, monolingual sentiment classification with 12 tracks, one per language; subtask B, multilingual sentiment classification using the tracks in subtask A; and subtask C, zero-shot sentiment classification. We present the results and findings of subtasks A, B, and C, and release the code on GitHub. Our goal is to leverage low-resource tweet data using the pre-trained Afro-xlmr-large, AfriBERTa-Large, Bert-base-arabic-camelbert-da-sentiment (Arabic-camelbert), Multilingual-BERT (mBERT), and BERT models for sentiment analysis of 14 African languages. The datasets for these subtasks consist of gold-standard, multi-class labeled Twitter datasets in these languages. Our results demonstrate that the Afro-xlmr-large model performed better than the other models on most of the language datasets. Similarly, the Nigerian languages Hausa, Igbo, and Yoruba achieved better performance than the other languages, which can be attributed to the larger volume of data available for them.
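    Multi-class sentiment shared tasks like this one are typically scored with macro-averaged F1, which weights every class equally regardless of frequency. A minimal self-contained sketch (the labels and predictions below are made up for illustration; the abstract does not specify the task's exact metric):

```python
def macro_f1(gold, pred):
    """Macro-averaged F1: per-class F1 scores, averaged with equal weight."""
    labels = set(gold) | set(pred)
    f1s = []
    for c in labels:
        tp = sum(g == c and p == c for g, p in zip(gold, pred))
        fp = sum(g != c and p == c for g, p in zip(gold, pred))
        fn = sum(g == c and p != c for g, p in zip(gold, pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

gold = ["pos", "neg", "neu", "pos"]  # hypothetical gold labels
pred = ["pos", "neg", "pos", "pos"]  # hypothetical model predictions
print(round(macro_f1(gold, pred), 3))  # 0.6
```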

    DATA SYNCHRONIZATION MODEL FOR HETEROGENEOUS MOBILE DEVICE DATABASES AND SERVER-SIDE DATABASE

    No full text
    Database synchronization can be defined as the process of establishing data equivalence among two or more data collections. Given how rapidly mobile devices have evolved in variety and sophistication, it is important to seek a solution that can utilize their individual capacities, such as CPU, memory, bandwidth, and processing power, when performing similar tasks. To this end, many research works have opted to develop solutions that are tailored to, or specific to, a particular vendor. A similar approach can be observed for database synchronization. This approach, even though effective, requires an extra step on the server, since each device type from a different vendor needs special synchronization treatment. Hence, in this study, a model called the Heterogeneous Mobile Database Synchronization Model (HMDSM) is proposed to enable data synchronization between heterogeneous mobile devices' databases and the server-side database, such that mobile devices, regardless of their individual differences, can synchronize their data with the server-side database.
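    The device-agnostic idea can be sketched with a last-write-wins merge over a vendor-neutral record layout. This is a minimal illustration, not the HMDSM protocol itself; the record shape and timestamps are hypothetical.

```python
def synchronize(server: dict, device: dict) -> dict:
    """Merge two {key: (timestamp, value)} stores; the newer timestamp
    wins, so the result is the same whichever side initiates the sync
    and regardless of the device's vendor."""
    merged = dict(server)
    for key, (ts, value) in device.items():
        if key not in merged or ts > merged[key][0]:
            merged[key] = (ts, value)
    return merged

# Hypothetical rows: one edited on both sides, one untouched, one new.
server = {"row1": (10, "server-edit"), "row2": (5, "unchanged")}
device = {"row1": (12, "device-edit"), "row3": (7, "new-on-device")}
print(synchronize(server, device))
```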

    Data Harmonization for Heterogeneous Datasets: A Systematic Literature Review

    No full text
    As data size increases drastically, its variety also increases. Investigating such heterogeneous data is one of the most challenging tasks in information management and data analytics. The heterogeneity and decentralization of data sources affect data visualization and prediction, thereby influencing analytical results accordingly. Data harmonization (DH) is a field that unifies the representation of such disparate data. Over the years, multiple solutions have been developed to minimize the heterogeneity and format disparity of big-data types. In this study, a systematic review of the literature was conducted to assess state-of-the-art DH techniques. The study aimed to understand the issues caused by heterogeneity, the need for DH, and the techniques that deal with substantially heterogeneous textual datasets. The search process produced 1355 articles, of which only 70 were found to be relevant after applying inclusion and exclusion criteria. The results show that the heterogeneity of structured, semi-structured, and unstructured (SSU) data can be managed using DH and its core techniques, such as text preprocessing, Natural Language Processing (NLP), machine learning (ML), and deep learning (DL). These techniques are applied to many real-world applications centered on the information-retrieval domain. Several assessment criteria were used to measure the efficiency of these techniques, such as precision, recall, F1, accuracy, and time. A detailed explanation of each research question, the common techniques, and the performance measures is also provided. Lastly, we present a detailed discussion of the existing work, contributions, and managerial and academic implications, along with the conclusion, limitations, and future research directions.
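    The text-preprocessing step named among the core DH techniques can be sketched as a small normalization pipeline that maps differently formatted records onto one representation. The normalization rules and stop-word list below are hypothetical, for illustration only.

```python
import re

def harmonize(record: str) -> list[str]:
    """Lowercase, strip punctuation, tokenize, and drop stop words, so
    heterogeneous spellings of the same content normalize to one form."""
    stop_words = {"the", "a", "an", "of", "and"}
    tokens = re.findall(r"[a-z0-9]+", record.lower())
    return [t for t in tokens if t not in stop_words]

print(harmonize("The Harmonization of DATA!"))  # ['harmonization', 'data']
```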